Collaborating Authors: Defense Innovation Board


Who Followed the Blueprint? Analyzing the Responses of U.S. Federal Agencies to the Blueprint for an AI Bill of Rights

Lage, Darren, Pruitt, Riley, Arnold, Jason Ross

arXiv.org Artificial Intelligence

This study examines the extent to which U.S. federal agencies responded to and implemented the principles outlined in the White House's October 2022 "Blueprint for an AI Bill of Rights." The Blueprint provided a framework for the ethical governance of artificial intelligence systems, organized around five core principles: safety and effectiveness, protection against algorithmic discrimination, data privacy, notice and explanation about AI systems, and human alternatives and fallback. Through an analysis of publicly available records across 15 federal departments, the authors found limited evidence that the Blueprint directly influenced agency actions after its release. Only five departments explicitly mentioned the Blueprint, while 12 took steps aligned with one or more of its principles. However, much of this work appeared to have precedents predating the Blueprint or motivations disconnected from it, such as compliance with prior executive orders on trustworthy AI. Departments' activities often emphasized priorities like safety, accountability and transparency that overlapped with Blueprint principles, but did not necessarily stem from it. The authors conclude that the non-binding Blueprint seems to have had minimal impact on shaping the U.S. government's approach to ethical AI governance in its first year. Factors like public concerns after high-profile AI releases and obligations to follow direct executive orders likely carried more influence over federal agencies. More rigorous study would be needed to definitively assess the Blueprint's effects within the federal bureaucracy and broader society.


DOD Adopts Ethical Principles for Artificial Intelligence

#artificialintelligence

The U.S. Department of Defense officially adopted a series of ethical principles for the use of Artificial Intelligence today following recommendations provided to Secretary of Defense Dr. Mark T. Esper by the Defense Innovation Board last October. The recommendations came after 15 months of consultation with leading AI experts in commercial industry, government, academia and the American public that resulted in a rigorous process of feedback and analysis among the nation's leading AI experts with multiple venues for public input and comment. "The United States, together with our allies and partners, must accelerate the adoption of AI and lead in its national security applications to maintain our strategic position, prevail on future battlefields, and safeguard the rules-based international order," said Secretary Esper. "AI technology will change much about the battlefield of the future, but nothing will change America's steadfast commitment to responsible and lawful behavior. The adoption of AI ethical principles will enhance the department's commitment to upholding the highest ethical standards as outlined in the DOD AI Strategy, while embracing the U.S. military's strong history of applying rigorous testing and fielding standards for technology innovations."


Google Cloud secures U.S. military AI cancer research contract

#artificialintelligence

Google Cloud announced today that it landed a contract to supply Veterans Affairs hospitals and Defense Health Agency treatment facilities with AI for predictive cancer and disease diagnosis. The contract comes from the Defense Innovation Unit (DIU), a Pentagon organization that brings consumer technology into the military. "The initial rollout will take place at select Defense Health Agency treatment facilities and Veteran's Affairs hospitals in the United States, with future plans to expand across the broader U.S. Military Health System," a Google Cloud post reads. "The AI-based models used to assist doctors as part of the prototype were developed from public and private datasets that were de-identified to remove personal health information and any personally identifiable information. All patient diagnostic data will solely be managed by the individual hospital or provider."


Silicon Valley execs and Pentagon AI chief talk AI at the edge

#artificialintelligence

When considering transformational ways to use computer vision at the edge in devices like robots, drones, and cameras, Booz Allen Hamilton VP Josh Sullivan advises caution, urging people to take security seriously in what has become a whole new attack vector. "For me, deploying an AI model in your IT environment is an entirely new attack vector. I've seen a model working correctly that can identify tanks and other military equipment be fooled into seeing a school bus because someone sent poisoned data into the model," he said. Failure to keep models secure can open the door to adversarial machine learning attacks that make malicious code appear benign, or a range of other bad outcomes. Sullivan was joined in conversation at VentureBeat's Transform 2020 conference by Nvidia VP of federal initiatives Anthony Robbins, Intel IoT VP Stacey Shulman, and Joint AI Center acting director Nand Mulchandani.


AI For National Security And The Challenge Of China

#artificialintelligence

This article has been adapted from the podcast, Eye on AI. In 2017, China announced its goal to become the world leader in AI by 2030. The US responded by creating a commission to review America's competitive position and to advise Congress on what steps are needed to maintain US leadership in this important field. Former Google chief executive Eric Schmidt and former Deputy Defense Secretary Bob Work were chosen from among fifteen appointed commissioners to lead the work. Earlier this month, the commission issued its first set of recommendations to Congress.


The U.S. military, algorithmic warfare, and big tech

#artificialintelligence

We learned this week that the Department of Defense is using facial recognition at scale, and Secretary of Defense Mark Esper said he believes China is selling lethal autonomous drones. Amid all that, you may have missed Joint AI Center (JAIC) director Lieutenant General Jack Shanahan -- who is charged by the Pentagon with modernizing and guiding artificial intelligence directives -- talking about a future of algorithmic warfare. Algorithmic warfare, which could dramatically change warfare as we know it, is built on the assumption that combat actions will happen faster than humans' ability to make decisions. Shanahan says algorithmic warfare would thus require some reliance on AI systems, though he stresses a need to implement rigorous testing and evaluation before using AI in the field to ensure it doesn't "take on a life of its own, so to speak." "We are going to be shocked by the speed, the chaos, the bloodiness, and the friction of a future fight in which this will be playing out, maybe in microseconds at times. How do we envision that fight happening? It has to be algorithm against algorithm," Shanahan said during a conversation with former Google CEO Eric Schmidt and Google VP of global affairs Kent Walker.


Reid Hoffman on AI, defense, and ethics when scaling a startup

#artificialintelligence

LinkedIn cofounder and Greylock Partners investor Reid Hoffman tells executives who are running startups that scale fast -- the kind that want to double in size every few months -- to build ethics into their businesses. As companies plan for the future and grow their engineering or sales ranks, they should consider what can go wrong, he said, and hire people whose job is dedicated to risk management. Next, he added, companies can develop a risk framework to sort risk levels. Anything that poses a catastrophic risk to individuals, a systemic risk to company systems, or a risk to a large number of users should be handled proactively to stay competitive with other startups. Hoffman, who coauthored the book Blitzscaling, joined former White House chief data scientist DJ Patil and Stanford University political science professor Amy Zegart on Tuesday at the Stanford Institute for Human-Centered Artificial Intelligence (HAI) fall conference on AI ethics, governance, and policy at the Hoover Institution in Palo Alto.


A group of tech executives warn military about unintended harm caused by AI in combat

Daily Mail - Science & tech

This week, the Defense Innovation Board issued a series of recommendations to the Department of Defense on how artificial intelligence should be implemented in future military conflict. The Defense Innovation Board was first created in 2016 to establish a series of best practices for potential collaborations between the US military and Silicon Valley. There are sixteen current board members from a broad range of disciplines, including former Google CEO Eric Schmidt, Facebook executive Marne Levine, Microsoft's Chief Digital Officer Kurt Delbene, astrophysicist Neil deGrasse Tyson, Steve Jobs biographer Walter Isaacson, and LinkedIn co-founder Reid Hoffman. 'Now is the time, at this early stage of the resurgence of interest in AI, to hold serious discussions about norms of AI development and use in a military context -- long before there has been an incident,' the report says. The report adds that using AI for military actions or decision-making comes with 'the duty to take feasible precautions to reduce the risk of harm to the civilian population and other protected persons and objects.'


Defense Innovation Board unveils AI ethics principles for the Pentagon

#artificialintelligence

The Defense Innovation Board, a panel of 16 prominent technologists advising the Pentagon, today voted to approve AI ethics principles for the Department of Defense. The report includes 12 recommendations for how the U.S. military can apply ethics in the future to both combat and non-combat AI systems. The guidance is organized around five main principles: responsible, equitable, traceable, reliable, and governable. The principles state that humans should remain responsible for "developments, deployments, use and outcomes," and that AI systems used by the military should be free of bias that can lead to unintended human harm. AI deployed by the DoD should also be reliable, governable, and use "transparent and auditable methodologies, data sources, and design procedure and documentation."